Communication in Multi-Agent Markov Decision Processes

Authors

  • Ping Xuan
  • Victor R. Lesser
  • Shlomo Zilberstein
Abstract

In this paper, we formulate an agent's decision process under the framework of Markov decision processes, and in particular, a multi-agent extension to the Markov decision process that includes agent communication decisions. We model communication as the way for each agent to obtain local state information from other agents, by paying a certain communication cost. Thus, agents have to decide not only which local action to perform, but also whether it is worthwhile to perform a communication action before deciding the local action. We believe that this provides a foundation for the formal study of coordination activities and may lead to insights into the design of agent coordination policies, and heuristic approaches in particular. An example problem is studied under this framework and its implications for coordination are discussed.
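
To make the decision structure concrete, here is a minimal notational sketch of such a communication-augmented multi-agent MDP; the symbols (two agents, local state sets S_i, communication cost C, belief b_j) are illustrative assumptions, not the paper's exact formalization.

\[
  \mathcal{M} \;=\; \langle\, S_1 \times S_2,\; A_1 \times A_2,\; P,\; R,\; C \,\rangle
\]

Each agent $i$ observes only its own local state $s_i \in S_i$ and holds a belief $b_j$ over the other agent's local state. At every step, agent $i$ first decides whether to communicate and then selects a local action:

\[
  \pi_i \colon S_i \times \mathcal{B}(S_j) \;\longrightarrow\; \{\textsf{communicate},\ \textsf{silent}\} \times A_i, \qquad j \neq i,
\]

where communicating reveals $s_j$ exactly but adds the cost $-C$ to the immediate reward, while staying silent means the local action $a_i \in A_i$ must be chosen against the belief $b_j$ alone.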


Related articles

Utilizing Generalized Learning Automata for Finding Optimal Policies in MMDPs

Multi-agent Markov decision processes (MMDPs), the generalization of Markov decision processes to the multi-agent case, have long been used for modeling multi-agent systems and serve as a suitable framework for multi-agent reinforcement learning. In this paper, a generalized learning automata based algorithm for finding optimal policies in MMDPs is proposed. In the proposed algorithm, MMDP ...


What to Communicate? Execution-time Decision in Multi-agent POMDPs

In recent years, multi-agent Partially Observable Markov Decision Processes (POMDPs) have emerged as a popular decision-theoretic framework for modeling and generating policies for the control of multi-agent teams. Teams controlled by multi-agent POMDPs can use communication to share observations and coordinate. Therefore, policies are needed to enable these teams to reason about communication. ...


Coordination vs. information in multi-agent decision processes

Agent coordination and communication are important issues in designing decentralised agent systems, which are often modelled as flavours of Markov Decision Processes (MDPs). Because communication incurs an overhead, various scenarios for sparse agent communication have been developed. In these treatments, coordination is usually considered more important than making use of local information. We...


Formal Modeling of Communication Decisions in Cooperative Multi-agent Systems

When collaborative agents cannot observe the global state of an uncertain environment, they must communicate in order to achieve their common goals, the attainment of which is evaluated through a global utility function. We develop a multiagent extension to Markov decision processes in which communication is an explicit action that incurs a cost. Thus, agents have to decide not only which local action to perform...


Coordinated Multi-Robot Exploration Under Communication Constraints Using Decentralized Markov Decision Processes

Recent works on multi-agent sequential decision making using decentralized partially observable Markov decision processes have been concerned with interaction-oriented resolution techniques and provide promising results. These techniques take advantage of local interactions and coordination. In this paper, we propose an approach based on an interaction-oriented resolution of decentralized decis...



Publication date: 2000